Supplementary Material

Goto, Collaborating Authors

Neural Information Processing Systems

The supplementary material is organized as follows. First, we prove Proposition 1 and Theorem 1, along with some preliminary lemmas. Definition 4 fixes the quantities generated by Algorithm 1 for all $i \in [m]$ and $k \ge 0$, defines the associated terms, and introduces the notation we use for the history of the method. The samples are assumed to be independent across clients.


Psychometric Tests for AI Agents and Their Moduli Space

Chojecki, Przemyslaw

arXiv.org Artificial Intelligence

We develop a moduli-theoretic view of psychometric test batteries for AI agents and connect it explicitly to the AAI score developed previously. First, we make precise the notion of an AAI functional on a battery and set out axioms that any reasonable autonomy/general intelligence score should satisfy. Second, we show that the composite index ('AAI-Index') defined previously is a special case of our AAI functional. Third, we introduce the notion of a cognitive core of an agent relative to a battery and define the associated AAI$_{\textrm{core}}$ score as the restriction of an AAI functional to that core. Finally, we use these notions to describe invariants of batteries under evaluation-preserving symmetries and outline how moduli of equivalent batteries are organized.
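The relationship between the composite index and the general functional can be made concrete with a small numeric sketch (all test names, scores, and weights below are hypothetical illustrations, not the paper's battery): a weighted composite index is one instance of an AAI functional on a battery, and the AAI core score is that same functional restricted to a sub-battery.

```python
# Hedged sketch (test names, scores, and weights are hypothetical):
# a weighted composite index as one instance of an AAI functional,
# with the AAI_core score given by restriction to a sub-battery.
def aai(scores, weights):
    """Weighted-average AAI functional over the sub-tests present in `scores`."""
    total = sum(weights[t] for t in scores)
    return sum(weights[t] * scores[t] for t in scores) / total

battery = {"planning": 0.8, "memory": 0.6, "tool_use": 0.9, "dialogue": 0.7}
weights = {t: 1.0 for t in battery}

# A hypothetical "cognitive core" of the agent relative to this battery:
core = {t: battery[t] for t in ("planning", "tool_use")}

print(aai(battery, weights))  # composite index over the full battery
print(aai(core, weights))     # AAI_core: the functional restricted to the core
```

Restriction changes only the index set of sub-tests, not the functional itself, which is the sense in which the core score is derived rather than separately defined.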



Are Hallucinations Bad Estimations?

Liu, Hude, Hu, Jerry Yao-Chieh, Zhang, Jennifer Yuntong, Song, Zhao, Liu, Han

arXiv.org Machine Learning

We formalize hallucinations in generative models as failures to link an estimate to any plausible cause. Under this interpretation, we show that even loss-minimizing optimal estimators still hallucinate. We confirm this with a general high-probability lower bound on the hallucination rate for generic data distributions. This reframes hallucination as a structural misalignment between loss minimization and human-acceptable outputs, and hence as an estimation error induced by miscalibration. Experiments on coin aggregation, open-ended QA, and text-to-image support our theory.
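A toy version of the coin-aggregation intuition can be sketched in a few lines (this is an illustrative caricature, not the paper's formal setup or experiment): under squared loss, the optimal constant estimate of a coin's outcome is its mean, which is not a value the coin can ever produce, so the loss-minimizing output is unlinked to any plausible cause.

```python
# Illustrative toy (not the paper's formal setup): the squared-loss-
# minimizing constant estimate of a fair coin's outcome is its mean,
# roughly 0.5, which is not an outcome the coin can actually produce --
# an estimate unlinked to any plausible cause, i.e. a "hallucination"
# in the sense described above.
import random

random.seed(0)
outcomes = [random.choice([0, 1]) for _ in range(1000)]

# The constant minimizing mean squared error is the empirical mean.
estimate = sum(outcomes) / len(outcomes)

plausible = set(outcomes)              # the only causes that occur: {0, 1}
hallucinated = estimate not in plausible

print(f"optimal estimate = {estimate:.3f}, plausible outputs = {plausible}")
print(f"hallucinated: {hallucinated}")
```

The point of the toy is that no amount of extra data fixes this: the estimator is optimal for its loss, yet its output never coincides with a realizable outcome.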


Towards Privacy-Aware Bayesian Networks: A Credal Approach

Rocchi, Niccolò, Stella, Fabio, de Campos, Cassio

arXiv.org Artificial Intelligence

Bayesian networks (BN) are probabilistic graphical models that enable efficient knowledge representation and inference. These have proven effective across diverse domains, including healthcare, bioinformatics and economics. The structure and parameters of a BN can be obtained by domain experts or directly learned from available data. However, as privacy concerns escalate, it becomes increasingly critical for publicly released models to safeguard sensitive information in training data. Typically, released models do not prioritize privacy by design. In particular, tracing attacks from adversaries can combine the released BN with auxiliary data to determine whether specific individuals belong to the data from which the BN was learned. State-of-the-art protection techniques involve introducing noise into the learned parameters. While this offers robust protection against tracing attacks, it significantly impacts the model's utility, in terms of both the significance and accuracy of the resulting inferences. Hence, high privacy may be attained at the cost of releasing a possibly ineffective model. This paper introduces credal networks (CN) as a novel solution for balancing the model's privacy and utility. After adapting the notion of tracing attacks, we demonstrate that a CN enables the masking of the learned BN, thereby reducing the probability of successful attacks. As CNs are obfuscated but not noisy versions of BNs, they can achieve meaningful inferences while safeguarding privacy. Moreover, we identify key learning information that must be concealed to prevent attackers from recovering the underlying BN. Finally, we conduct a set of numerical experiments to analyze how privacy gains can be modulated by tuning the CN hyperparameters. Our results confirm that CNs provide a principled, practical, and effective approach towards the development of privacy-aware probabilistic graphical models.
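The general idea of masking a BN with a credal set can be sketched with a one-parameter toy (a minimal illustration of interval-based obfuscation in general, not the paper's construction; the parameter values and the width hyperparameter `delta` are hypothetical): each learned probability is released as an interval rather than a point, so inference yields bounds and the exact learned value is concealed.

```python
# Minimal sketch of interval-based obfuscation (not the paper's
# construction): release a credal interval around each learned
# parameter instead of the point estimate, so an attacker cannot
# read off the exact value learned from the sensitive data.
def mask_parameter(p, delta):
    """Replace a learned probability p with an interval of half-width delta."""
    return (max(0.0, p - delta), min(1.0, p + delta))

# Hypothetical learned CPT entry P(disease=1 | symptom=1):
p_learned = 0.83
lo, hi = mask_parameter(p_learned, delta=0.05)

# Inference with intervals produces bounds rather than a point value,
# e.g. for the joint P(disease=1, symptom=1) with P(symptom=1) = 0.4:
p_symptom = 0.4
joint_lo, joint_hi = lo * p_symptom, hi * p_symptom
print(f"released interval: [{lo:.2f}, {hi:.2f}]")
print(f"joint probability bounds: [{joint_lo:.3f}, {joint_hi:.3f}]")
```

Widening `delta` strengthens the masking but loosens the inferred bounds, which mirrors the privacy-utility trade-off the paper modulates through the CN hyperparameters.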


Approximating High-Dimensional Earth Mover's Distance as Fast as Closest Pair

Beretta, Lorenzo, Cohen-Addad, Vincent, Jayaram, Rajesh, Waingarten, Erik

arXiv.org Artificial Intelligence

We give a reduction from $(1+\varepsilon)$-approximate Earth Mover's Distance (EMD) to $(1+\varepsilon)$-approximate Closest Pair (CP). As a consequence, we improve the fastest known approximation algorithm for high-dimensional EMD. Here, given $p\in [1, 2]$ and two sets of $n$ points $X,Y \subseteq (\mathbb R^d,\ell_p)$, their EMD is the minimum cost of a perfect matching between $X$ and $Y$, where the cost of matching two vectors is their $\ell_p$ distance. Further, CP is the basic problem of finding a pair of points realizing $\min_{x \in X, y\in Y} \|x-y\|_p$. Our contribution is twofold: we show that if a $(1+\varepsilon)$-approximate CP can be computed in time $n^{2-\phi}$, then a $1+O(\varepsilon)$ approximation to EMD can be computed in time $n^{2-\Omega(\phi)}$; plugging in the fastest known algorithm for CP [Alman, Chan, Williams FOCS'16], we obtain a $(1+\varepsilon)$-approximation algorithm for EMD running in time $n^{2-\tilde{\Omega}(\varepsilon^{1/3})}$ for high-dimensional point sets, which improves over the prior fastest running time of $n^{2-\Omega(\varepsilon^2)}$ [Andoni, Zhang FOCS'23]. Our main technical contribution is a sublinear implementation of the Multiplicative Weights Update framework for EMD. Specifically, we demonstrate that the updates can be executed without ever explicitly computing or storing the weights; instead, we exploit the underlying geometric structure to perform the updates implicitly.
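The two problems being related can be pinned down with brute-force reference implementations (illustrative definitions only, exponential-time on tiny point sets; the paper's algorithm is far more involved): EMD as a minimum-cost perfect matching under the $\ell_p$ distance, and CP as the minimum cross distance.

```python
# Reference definitions on tiny point sets (not the paper's algorithm):
# exact EMD via brute force over perfect matchings, and exact Closest
# Pair, both under the l_p distance.
from itertools import permutations

def lp_dist(x, y, p=2):
    """The l_p distance between two equal-length vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def emd(X, Y, p=2):
    """Minimum cost of a perfect matching between X and Y (|X| == |Y|)."""
    return min(sum(lp_dist(x, y, p) for x, y in zip(X, perm))
               for perm in permutations(Y))

def closest_pair(X, Y, p=2):
    """min over x in X, y in Y of the l_p distance between x and y."""
    return min(lp_dist(x, y, p) for x in X for y in Y)

X = [(0.0, 0.0), (1.0, 0.0)]
Y = [(0.0, 1.0), (1.0, 1.0)]
print(emd(X, Y))           # vertical matching: 1 + 1 = 2
print(closest_pair(X, Y))  # nearest cross pair is at distance 1
```

EMD must pay for every point while CP only certifies the single cheapest pair, which is what makes a reduction from the former to the latter non-obvious.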


A Regression Approach to Learning Augmented Online Algorithms (Supplementary)

K. Anand, R. Ge, A. Kumar, D. Panigrahi

Neural Information Processing Systems

In this section, we prove Theorems 6 and 12, which give upper bounds on the sample complexity in the standard and agnostic settings, respectively. The following is a well-known result that relates covering numbers to the pseudo-dimension. A.1 The Standard Model: Proof of Theorem 6. First, we relate covering numbers to this error measure.


12112_ood_link_prediction_generaliza

Chou, Yangze

Neural Information Processing Systems

In Appendix A, we introduce more related work that has not been discussed in the main paper. In Appendix B, we provide more details on the experimental setup and model training. In Appendix D, we show that large random and real-world graphs have few isomorphic nodes. We also study the ability of GNNs to emulate graph algorithms as graph sizes increase. Our experiments show that the max aggregator, just like the sum aggregator, exhibits poor OOD performance as test graph sizes increase.
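The size-OOD effect on aggregators can be illustrated with a toy computation (a hypothetical illustration, not the paper's experiment): as neighborhoods grow, a sum readout grows with the neighbor count while a max readout saturates, so either way the aggregated statistics at large test sizes drift away from those seen during training.

```python
# Toy illustration (hypothetical, not the paper's experiment): neighborhood
# aggregations over graphs of growing size. The sum readout scales with the
# number of neighbors while the max saturates; in both cases the statistics
# at larger test sizes differ from those at training size, which is the
# size-based distribution shift behind the poor OOD performance.
def aggregate(features, how):
    """Aggregate a list of scalar neighbor features with 'sum' or 'max'."""
    return sum(features) if how == "sum" else max(features)

for n in [4, 16, 64]:                 # train-size vs. larger test sizes
    feats = [1.0] * n                 # constant features on an n-node neighborhood
    print(n, aggregate(feats, "sum"), aggregate(feats, "max"))
```

The sum readout grows linearly in `n` while the max stays fixed, so a model calibrated on small training graphs sees out-of-range inputs (sum) or no size signal at all (max) at test time.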